Numerous single-image super-resolution algorithms have been proposed in the literature, but few studies address the problem of performance evaluation based on visual perception. While most super-resolution images are evaluated by full-reference metrics, their effectiveness is not clear and the required ground-truth images are not always available in practice. To address these problems, we conduct human subject studies using a large set of super-resolution images and propose a no-reference metric learned from visual perceptual scores. Specifically, we design three types of low-level statistical features in both the spatial and frequency domains to quantify super-resolved artifacts, and learn a two-stage regression model to predict the quality scores of super-resolution images without referring to ground-truth images. Extensive experimental results show that the proposed metric is effective and efficient in assessing the quality of super-resolution images based on human perception.
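The overall pipeline described above can be sketched in minimal form: extract a few low-level statistics from a super-resolved image in the spatial domain (gradient statistics) and the frequency domain (high-frequency energy ratio), then map them to a quality score with a learned regressor. This is a hypothetical illustration, not the paper's actual feature set or two-stage model; the feature choices and the closed-form ridge regressor here are stand-in assumptions.

```python
import numpy as np

def sr_quality_features(img):
    """Hypothetical low-level statistics for a grayscale image (2-D array):
    spatial gradient statistics and a frequency-domain energy ratio.
    These stand in for the paper's three feature types, which are not
    specified in the abstract."""
    img = np.asarray(img, dtype=np.float64)
    # Spatial domain: mean/std of gradient magnitude (sharpness/artifact proxy)
    gy, gx = np.gradient(img)
    grad = np.hypot(gx, gy)
    # Frequency domain: fraction of spectral energy outside a low-pass band
    spec = np.fft.fftshift(np.abs(np.fft.fft2(img)) ** 2)
    h, w = spec.shape
    cy, cx, r = h // 2, w // 2, min(h, w) // 4
    low = spec[cy - r:cy + r, cx - r:cx + r].sum()
    hf_ratio = 1.0 - low / spec.sum()
    return np.array([grad.mean(), grad.std(), hf_ratio])

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression; a simple stand-in for the learned
    two-stage regression model."""
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # append bias column
    A = X1.T @ X1 + lam * np.eye(X1.shape[1])
    return np.linalg.solve(A, X1.T @ y)

def predict(w, X):
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])
    return X1 @ w
```

In use, one would compute `sr_quality_features` for each super-resolved image in a training set, fit the regressor against human perceptual scores, and then score new images with `predict`; no ground-truth high-resolution image is needed at test time, which is the defining property of a no-reference metric.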